Ideally (at least in my case), I could just use the when parameter passed to the tap block to figure out where I am in the audio, but when is in node time and needs to be converted to player time before it makes sense (I need my position relative to the entire audio buffer I scheduled on the player node, not relative to the tap block's buffer).
Also, when the playerNode is paused, the when parameter doesn't account for the paused time. The tap block continues to fire while the player node is paused, and the when time knows nothing about paused/stopped state, so any UI synchronization you do will jump all over the place once you start pausing/resuming. -playerTimeForNodeTime: does account for all of this, but...
I don't think it's safe to call any of the audio engine/player node APIs in the tap block without risking a deadlock. If I'm wrong about all this I'd be grateful for an education. The documentation seems a bit scarce, and the dev forums have been pretty quiet lately.
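For reference, here's the conversion I'm talking about, done somewhere safe like the main thread (a minimal sketch using my playerNode; this is exactly the call I don't think I can make from inside the tap):

AVAudioTime *nodeTime = playerNode.lastRenderTime; // nil when the node isn't playing
AVAudioTime *playerTime = nodeTime ? [playerNode playerTimeForNodeTime:nodeTime] : nil;
if (playerTime.isSampleTimeValid)
{
    // Position relative to everything scheduled on this player node,
    // with paused/stopped time accounted for.
    NSTimeInterval seconds = playerTime.sampleTime / playerTime.sampleRate;
    NSLog(@"Player position: %f seconds", seconds);
}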
What I've come up with for now is to keep my own atomic_bool in sync with my calls that pause/play the player node, and to read that atomic bool in the tap block instead of doing if (!playerNode.isPlaying) { return; }
Then, to account for the node time/player time situation: every time I schedule a buffer, I reset a counter to 0 that's synchronized with the tap block. In the tap block I increment it on each invocation to compute the sample time relative to the entire audio buffer. I only schedule one buffer at a time (for now); I'd need to figure out a good place to reset the counter to 0 if I scheduled two buffers at once.
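Roughly, the sketch looks like this (sIsPlaying and sFrameCount are my own names; stdatomic.h supplies the atomics):

#include <stdatomic.h>

static atomic_bool sIsPlaying;
static atomic_uint_fast64_t sFrameCount;

// Wherever I schedule a buffer (only one at a time for now), reset the counter:
atomic_store(&sFrameCount, 0);
[playerNode scheduleBuffer:buffer
                    atTime:nil
                   options:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
         completionHandler:nil];

// Play/pause goes through wrappers so the flag tracks what I asked for:
atomic_store(&sIsPlaying, true);
[playerNode play];

// In the tap block, never touching playerNode itself:
if (!atomic_load(&sIsPlaying)) { return; }
uint64_t startFrame = atomic_fetch_add(&sFrameCount, buffer.frameLength);
// startFrame is this tap invocation's position relative to the scheduled buffer.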
If the system ever stops my player node (on an error or something), my tap block could be out of sync, since the flag I use to track playerNode.isPlaying isn't bound to 'the truth', but it's better than deadlocking.
If there is a cleaner way to achieve this, I'm all ears.
All these calls invoke AVAudioNodeImplBase::GetAttachAndEngineLock internally, so I don't think I can use them in the tap block. Grr.
The documentation states that -playerTimeForNodeTime: will return nil if the player node is not playing. I'm not sure if I can call it in the tap block, or if it's possible to run into the same issue (like reading isRunning on AVAudioEngine on the main thread while reading playerNode.isPlaying on another thread).
I was thinking of just using my own atomic bool and setting it when I play/pause the player node. Then in the tap block, just read that BOOL instead of reading from playerNode directly.
But I imagine it would be possible for the system to stop the player node. And my flag could be out of sync with the true playerNode.isPlaying property. Is there a notification for such an event?
So many questions.
This didn't resolve the issue. I just hit a case where simply invoking the isRunning getter on AVAudioEngine deadlocks.
My app started beach balling and I paused the debugger. The call stack looks like this:
#0 0x000000019391ca9c in __psynch_mutexwait ()
#1 0x00000001029a5100 in _pthread_mutex_firstfit_lock_wait ()
#2 0x00000001029a5014 in _pthread_mutex_firstfit_lock_slow ()
#3 0x00000001938928ec in std::__1::recursive_mutex::lock ()
#4 0x00000001ef8b87dc in -[AVAudioEngine isRunning] ()
So I have an if statement:

if (!engine.isRunning)
{
    // Do something
}
I do have a tap block installed on the player node. Inside the tap block I read:
if (!weakToStrongPlayerNode.isPlaying)
{
    // Don't do anything in the tap block while the player node is paused.
    return;
}
That getter appears to be executing at the same time:
Thread 82 Queue : RealtimeMessenger.mServiceQueue (serial)
#0 0x000000019391d3c8 in __semwait_signal ()
#1 0x00000001937fc714 in nanosleep ()
#2 0x00000001938932f4 in std::__1::this_thread::sleep_for ()
#3 0x00000001ef89a498 in AVAudioNodeImplBase::GetAttachAndEngineLock ()
#4 0x00000001ef8aa57c in -[AVAudioPlayerNode isPlaying] ()
Is it not safe to check the playing state of the player node in the tap block? What about calls to methods like -playerTimeForNodeTime:? Can I call these in the tap block?
So I think the way to go is to make sure the engine isn't running before I connect the player node to the main mixer node. I will need to reconnect if the engine gets paused/stopped before it's restarted, though, so I'm checking if (!engine.isRunning) before connecting.
I think my mistake was accidentally calling connect on the engine while it was already running (and the node was already connected).
I think the problem is that I called -connect:to:format: on the player node multiple times (when it was already connected). So I'll just connect the player node to the main mixer node once and leave it. It seems that connecting the player node multiple times tears down a bunch of stuff as a side effect, which can apparently cause a deadlock. Perhaps it's related to the fact that I have a tap block installed?
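In other words, something like this (a sketch; sIsConnected is my own guard flag):

static BOOL sIsConnected = NO;

// Connect exactly once, and only while the engine isn't running.
if (!engine.isRunning && !sIsConnected)
{
    [engine connect:playerNode
                 to:engine.mainMixerNode
             format:buffer.format];
    sIsConnected = YES;
}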
Assuming this resolves the issue (I need to test it more), is there any point where I should reconnect the player node to the main mixer node (as part of error handling or a configuration change)?
Seems like this can be triggered in some other scenarios as well.
I have my MacBook Pro connected to an external display. That external display is also connected to another computer.
The external display's input is currently switched to the other computer (not the MacBook Pro). Not that the monitor's input selection should matter, but it made it obvious that the issue was happening again. With my MacBook Pro open, I'm clicking to open files in Finder, etc., and nothing seems to happen.
It's because macOS still treats the external display (technically connected, but not actually visible since its input is switched to the other computer) as the mainScreen, so all windows open on the wrong screen. I'm not sure why main-screen status didn't move to the laptop's display when clicking around. This occurrence didn't involve my app at all, though it's the same issue (main-screen status not being given to the screen that seemingly should have it).
My bug report and TSI both seem to have been ghosted, unfortunately. I'm still wondering whether this is considered a bug and whether there is any potential workaround available for my app.
Looks like I needed to lower the buffer size when installing the tap block; otherwise too many frames pass between tap block invocations. I'm still not sure why a negative sample time is passed on the first call to the tap block when isSampleTimeValid is YES, but it has no effect and is ignored by the code.
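For reference, the tap is installed like this now (a sketch; the 1024 frame count is just an illustrative smaller value):

[playerNode installTapOnBus:0
                 bufferSize:1024 // smaller buffer = more frequent invocations
                     format:nil
                      block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when)
{
    // First invocation can report a negative sampleTime even though
    // isSampleTimeValid is YES; it gets ignored.
    if (when.sampleTime < 0) { return; }
    // ...
}];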
I got pulled away from this code for a while. Coming back to it... I noticed that when.sampleTime in the tap block just continues to accumulate, even across multiple -scheduleBuffer:atTime:options:completionCallbackType:completionHandler: calls (with a nil time passed to atTime:).
Now if I call scheduleBuffer:atTime:options:completionCallbackType:completionHandler: and pass in a start time created like this:
AVAudioTime *startTime = [[AVAudioTime alloc] initWithSampleTime:0 atRate:22050];
The when parameter in the tap block has the same sample time as the one returned by -playerTimeForNodeTime:.
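That is, the scheduling call looks like this (a sketch; 22050 is my buffer's sample rate):

AVAudioTime *startTime = [[AVAudioTime alloc] initWithSampleTime:0 atRate:22050];
[playerNode scheduleBuffer:buffer
                    atTime:startTime
                   options:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
         completionHandler:nil];
// With an explicit zero start time, when.sampleTime in the tap lines up with
// -playerTimeForNodeTime: instead of accumulating across schedule calls.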
The sampleTime values in the tap block still don't seem to be accurate, though? For example, the first time it fires, when.sampleTime is negative (even though isSampleTimeValid is YES?).
I have code I want to execute when certain ranges of the buffer are reached, for example:
// in the tap block
if (!when.isSampleTimeValid) { return; }
if (!weakToStrongPlayerNode.isPlaying) { return; }
for (TimingInfo *timing in array)
{
    if (when.sampleTime >= timing.startFrame
        && when.sampleTime <= timing.endFrame)
    {
        // do something, etc.
    }
}
But the timing just appears to be off, or I'm misunderstanding how I ought to interpret when in the tap block. To verify that my TimingInfo objects have the expected values, I used them to slice the audio buffer into separate .wav files and got accurate results, but I'm having a hard time getting it to work from AVAudioEngine's tap block.
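For what it's worth, the slicing I used for verification looks roughly like this (a sketch; it assumes non-interleaved float PCM, and sourceBuffer/outputURL/timing are my own objects):

NSError *error = nil;
AVAudioFile *file = [[AVAudioFile alloc] initForWriting:outputURL
                                               settings:sourceBuffer.format.settings
                                                  error:&error];
AVAudioFrameCount length = (AVAudioFrameCount)(timing.endFrame - timing.startFrame);
AVAudioPCMBuffer *slice = [[AVAudioPCMBuffer alloc] initWithPCMFormat:sourceBuffer.format
                                                        frameCapacity:length];
for (AVAudioChannelCount channel = 0; channel < sourceBuffer.format.channelCount; channel++)
{
    // Copy frames [startFrame, endFrame) for each channel into the slice.
    memcpy(slice.floatChannelData[channel],
           sourceBuffer.floatChannelData[channel] + timing.startFrame,
           length * sizeof(float));
}
slice.frameLength = length;
[file writeFromBuffer:slice error:&error];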
I'd like to know more about this. I have a QOS_CLASS_USER_INTERACTIVE queue dedicated to AVAudioEngine calls. It seems to work fine so far, but I do get this warning on a call to:
[engine connect:playerNode
             to:engine.mainMixerNode
         format:buffer.format];
[Internal] Thread running at User-interactive quality-of-service class waiting on a lower QoS thread running at Default quality-of-service class. Investigate ways to avoid priority inversions.
If I change the queue's QoS to default, the warning goes away, but the higher QoS seems more appropriate for the task?
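For context, the queue is created like this (a sketch; the label is a placeholder):

dispatch_queue_attr_t attributes =
    dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL,
                                            QOS_CLASS_USER_INTERACTIVE, 0);
dispatch_queue_t audioEngineQueue =
    dispatch_queue_create("com.example.audio-engine", attributes);

dispatch_async(audioEngineQueue, ^{
    [engine connect:playerNode
                 to:engine.mainMixerNode
             format:buffer.format];
});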
Hi, I think we should continue this with https://feedbackassistant.apple.com. Please attach a minimal project and an image that replicates the issue.
I already filed FB15114920. In that sample I just loaded the CMYK image using NSBitmapImageRep... if I remember correctly. But if you use that same source image in that sample project and try to use the Accelerate APIs to convert it to RGB, you'll get the same black box NSBitmapImageRep gives you.
Third time is the charm. Finally went through...
I filed FB15777063
In addition to the Feedback, I opened a TSI on this at the end of September. I haven't heard back.
Is this considered a bug or not? I can't think of any reason why the behavior I described would be intentional, but if it is, I think there should be a way for developers to opt out.
This was an intentional change in macOS Sequoia to limit the ability of key-logging malware to observe keys in other applications. The issue of concern was that shift+option can be used to generate alternate characters in passwords, such as Ø (shift-option-O).
There is no workaround; macOS Sequoia now requires that a hotkey registration use at least one modifier that is not shift or option.
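Concretely, assuming the Carbon RegisterEventHotKey call is the registration path in question (an assumption; the thread doesn't name the API), a registration must include a non-Shift/Option modifier to be accepted on Sequoia:

#import <Carbon/Carbon.h>

// Shift+Option alone is rejected now; adding Command (or Control)
// satisfies the new requirement.
EventHotKeyID hotKeyID = { .signature = 'hkey', .id = 1 };
EventHotKeyRef hotKeyRef = NULL;
OSStatus status = RegisterEventHotKey(kVK_ANSI_O,
                                      cmdKey | shiftKey | optionKey,
                                      hotKeyID,
                                      GetApplicationEventTarget(),
                                      0,
                                      &hotKeyRef);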
I don't understand this explanation at all.
Why not simply stop dispatching keyboard events while the user is interacting with password-related UI? For example, if an NSSecureTextField is being edited in the active app, just don't post the keyboard event to listeners. When the user is not interacting with password-related UI, let them trigger their keyboard shortcuts, because... that's what they actually want to do.